One of the goals in scaling sequential machine learning methods pertains to dealing with high-dimensional data spaces. A key related challenge is that many methods depend heavily on obtaining the inverse covariance matrix of the data. It is well known that covariance matrix estimation is problematic when the number of observations is small relative to the number of variables. A common way to tackle this problem is through a shrinkage estimator that offers a compromise between the sample covariance matrix and a well-conditioned matrix, with the aim of minimizing the mean-squared error. We derived sequential update rules to approximate the inverse shrinkage estimator of the covariance matrix. The approach paves the way for improved large-scale machine learning methods that involve sequential updates.
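To make the ingredients concrete, here is a minimal sketch of the two pieces the abstract refers to: a shrinkage estimator that blends the sample covariance with a well-conditioned target, and a rank-one sequential update of an inverse. The scaled-identity target (Ledoit-Wolf style) and the Sherman-Morrison update shown here are illustrative assumptions, not the paper's exact derived rules.

```python
import numpy as np

def shrinkage_covariance(X, lam):
    """Shrinkage estimator: a convex combination of the sample covariance S
    and a well-conditioned target (here a scaled identity, as in Ledoit-Wolf).
    lam in [0, 1] is the shrinkage intensity."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                  # sample covariance
    mu = np.trace(S) / p               # scale of the identity target
    return (1.0 - lam) * S + lam * mu * np.eye(p)

def sherman_morrison_update(A_inv, x):
    """Sequential rank-one update of an inverse: given A_inv = A^{-1} and a
    new observation x, return (A + x x^T)^{-1} without re-inverting."""
    x = x.reshape(-1, 1)
    Ax = A_inv @ x
    return A_inv - (Ax @ Ax.T) / (1.0 + float(x.T @ Ax))
```

The Sherman-Morrison step costs O(p^2) per observation instead of the O(p^3) of a full inverse, which is what makes sequential maintenance of an (approximate) inverse shrinkage estimator attractive at scale.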